
    SC-VAE: Sparse Coding-based Variational Autoencoder

    Learning rich data representations from unlabeled data is a key challenge for applying deep learning algorithms to downstream supervised tasks. Several variants of variational autoencoders (VAEs) have been proposed to learn compact data representations by encoding high-dimensional data in a lower-dimensional space. Two main classes of VAE methods may be distinguished depending on the characteristics of the meta-priors that are enforced in the representation learning step. The first class of methods derives a continuous encoding by assuming a static prior distribution in the latent space. The second class of methods instead learns a discrete latent representation using vector quantization (VQ) along with a codebook. However, both classes of methods suffer from certain challenges, which may lead to suboptimal image reconstruction results: the first class suffers from posterior collapse, whereas the second suffers from codebook collapse. To address these challenges, we introduce a new VAE variant, termed SC-VAE (sparse coding-based VAE), which integrates sparse coding within the variational autoencoder framework. Instead of learning a continuous or discrete latent representation, the proposed method learns a sparse data representation that consists of a linear combination of a small number of learned atoms. The sparse coding problem is solved using a learnable version of the iterative shrinkage thresholding algorithm (ISTA). Experiments on two image datasets demonstrate that our model achieves improved image reconstruction results compared to state-of-the-art methods. Moreover, the learned sparse code vectors allow us to perform downstream tasks such as coarse image segmentation by clustering image patches. Comment: 15 pages, 11 figures, and 3 tables
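
    The paper solves the sparse coding problem with a learnable (unrolled) version of ISTA; as background, a minimal NumPy sketch of plain ISTA for min_z 0.5*||x - Dz||^2 + lam*||z||_1 is given below. The dictionary, signal, iteration count, and lam are hypothetical placeholders, and the learned (LISTA-style) parameterization is not shown.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam=0.1, n_iter=100):
    """Plain ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1.

    x : (d,) signal, D : (d, k) dictionary of atoms; returns the sparse code z.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the data-fit term
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Toy usage with a random dictionary (illustrative only).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
x = D[:, :5] @ rng.standard_normal(5)      # signal built from 5 atoms
z = ista(x, D)
print(np.count_nonzero(np.abs(z) > 1e-6))  # number of active atoms
```

    A learnable variant replaces the fixed D-dependent matrices and thresholds with trainable parameters and a fixed, small number of unrolled iterations.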

    Normative Modeling using Multimodal Variational Autoencoders to Identify Abnormal Brain Structural Patterns in Alzheimer Disease

    Normative modelling is an emerging method for understanding the heterogeneity within brain disorders such as Alzheimer Disease (AD) by quantifying how each patient deviates from the expected normative pattern learned from a healthy control distribution. Since AD is a multifactorial disease with more than one biological pathway, multimodal magnetic resonance imaging (MRI) data can provide complementary information about the disease heterogeneity. However, existing deep learning based normative models on multimodal MRI data use unimodal autoencoders with a single encoder and decoder that may fail to capture the relationships between brain measurements extracted from different MRI modalities. In this work, we propose a multi-modal variational autoencoder (mmVAE) based normative modelling framework that captures the joint distribution between modalities to identify abnormal brain structural patterns in AD. Our multi-modal framework takes as input FreeSurfer-processed brain region volumes from T1-weighted (cortical and subcortical) and T2-weighted (hippocampal) scans of cognitively normal participants to learn the morphological characteristics of the healthy brain. The estimated normative model is then applied to AD patients to quantify the deviation in brain volumes and identify the abnormal brain structural patterns associated with the different AD stages. Our experimental results show that modeling the joint distribution between the multiple MRI modalities generates deviation maps that are more sensitive to disease staging within AD, correlate better with patient cognition, and yield a higher number of brain regions with statistically significant deviations compared to a unimodal baseline model with all modalities concatenated as a single input. Comment: Medical Imaging Meets NeurIPS workshop in NeurIPS 202
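
    The abstract does not spell out how the deviation maps are computed; a common normative-modelling convention (assumed here) is to z-score each region's reconstruction error in patients against the healthy-control error distribution. The sketch below illustrates only that step with hypothetical arrays, not the paper's exact mmVAE formulation.

```python
import numpy as np

def deviation_maps(recon_error_controls, recon_error_patients, eps=1e-8):
    """Z-score patient reconstruction errors against the control distribution.

    Both inputs are (n_subjects, n_regions) arrays of per-region reconstruction
    errors produced by a trained (mm)VAE decoder on held-out data.
    """
    mu = recon_error_controls.mean(axis=0)
    sd = recon_error_controls.std(axis=0)
    return (recon_error_patients - mu) / (sd + eps)

# Hypothetical shapes: 200 held-out controls, 150 AD patients, 100 brain regions.
rng = np.random.default_rng(1)
controls = rng.chisquare(3, size=(200, 100))
patients = rng.chisquare(5, size=(150, 100))
Z = deviation_maps(controls, patients)
abnormal_fraction = (np.abs(Z) > 1.96).mean(axis=0)  # share of patients deviating per region
```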

    The University of Pennsylvania Glioblastoma (UPenn-GBM) cohort: Advanced MRI, clinical, genomics, & radiomics

    Glioblastoma is the most common aggressive adult brain tumor. Numerous studies have reported results from either private institutional data or publicly available datasets. However, current public datasets are limited in terms of: a) number of subjects, b) lack of a consistent acquisition protocol, c) data quality, or d) accompanying clinical, demographic, and molecular information. Toward alleviating these limitations, we contribute the University of Pennsylvania Glioblastoma Imaging, Genomics, and Radiomics (UPenn-GBM) dataset, currently the largest publicly available comprehensive collection of its kind, describing 630 patients diagnosed with de novo glioblastoma. The UPenn-GBM dataset includes (a) advanced multi-parametric magnetic resonance imaging scans acquired during routine clinical practice at the University of Pennsylvania Health System, (b) accompanying clinical, demographic, and molecular information, (c) perfusion and diffusion derivative volumes, (d) computationally-derived and manually-revised expert annotations of tumor sub-regions, as well as (e) quantitative imaging (also known as radiomic) features corresponding to each of these regions. This collection represents our contribution towards repeatable, reproducible, and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments.

    Deformable Medical Image Registration: A Survey

    Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this technical report, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently, and the most recent techniques are presented in a systematic fashion. The contribution of this technical report is to provide an extensive account of registration techniques in a systematic manner.
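
    As a concrete anchor for what a deformable registration ultimately produces, namely a dense displacement field applied to a moving image, the following NumPy/SciPy sketch warps a 2D image with such a field under the usual backward-warping convention. It is a generic illustration, not a method taken from the survey; the image, displacement field, and interpolation order are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, displacement):
    """Warp a 2D image with a dense displacement field.

    moving       : (H, W) intensity image.
    displacement : (2, H, W) displacements (dy, dx) in voxels; the warped image
                   samples `moving` at x + u(x) (backward warping).
    """
    H, W = moving.shape
    grid_y, grid_x = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([grid_y + displacement[0], grid_x + displacement[1]])
    return map_coordinates(moving, coords, order=1, mode="nearest")

# Toy example: shift the sampling grid by two voxels along x.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
u = np.zeros((2, 64, 64)); u[1] = 2.0
warped = warp_image(img, u)
```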

    MRF-based Diffeomorphic Population Deformable Registration & Segmentation

    In this report, we present a novel framework to mutually deform a population of n examples based on an optimality criterion. The optimality criterion comprises three terms: one that imposes local smoothness, a second that minimizes the individual distances between all possible pairs of images, and a last one that is a global statistical measurement based on a "compactness" criterion. The problem is reformulated using a discrete MRF, where the above constraints are encoded in singleton potentials (global term) and pair-wise potentials (smoothness as intra-layer costs and pair-alignments as inter-layer costs). Furthermore, we propose a novel grid-based deformation scheme that guarantees the diffeomorphism of the deformation while being computationally favorable compared to standard deformation methods. Towards addressing large deformations, we propose a compositional approach where the deformations are recovered through the sub-optimal solutions of successive discrete MRFs. The resulting paradigm is optimized using efficient linear programming. The proposed framework for the mutual deformation of the images is applied to the group-wise registration problem as well as to an atlas-based population segmentation problem. Both artificially generated data with known deformations and real data from medical studies were used for the validation of the method. Promising results demonstrate the potential of our method.
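
    For reference, the discrete MRF energy alluded to above typically takes the standard singleton-plus-pairwise form below; this is a sketch under the usual conventions rather than the report's exact potentials.

```latex
E(\mathbf{l}) \;=\; \sum_{p \in \mathcal{V}} \theta_p(l_p)
\;+\; \sum_{(p,q) \in \mathcal{E}} \theta_{pq}(l_p, l_q)
```

    Here l_p is the discrete label (candidate displacement) assigned to control point p, the singleton terms carry the global compactness cost, and the pairwise terms encode the intra-layer smoothness and inter-layer pair-alignment costs over the edge set.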

    MRI-based classification of IDH mutation and 1p/19q codeletion status of gliomas using a 2.5D hybrid multi-task convolutional neural network

    Isocitrate dehydrogenase (IDH) mutation and 1p/19q codeletion status are important prognostic markers for glioma that are currently determined using invasive procedures. Our goal was to develop artificial intelligence-based methods to determine these molecular alterations non-invasively from MRI. For this purpose, pre-operative MRI scans of 2648 patients with gliomas (grade II-IV) were collected from Washington University School of Medicine (WUSM; n = 835) and publicly available datasets, viz. Brain Tumor Segmentation (BraTS; n = 378), LGG 1p/19q (n = 159), Ivy Glioblastoma Atlas Project (Ivy GAP; n = 41), The Cancer Genome Atlas (TCGA; n = 461), and the Erasmus Glioma Database (EGD; n = 774). A 2.5D hybrid convolutional neural network was proposed to simultaneously localize the tumor and classify its molecular status by leveraging imaging features from MR scans and prior knowledge features from clinical records and tumor location. The models were tested on one internal (TCGA) and two external (WUSM and EGD) test sets. For IDH, the best-performing model achieved areas under the receiver operating characteristic curve (AUROC) of 0.925, 0.874, and 0.933, and areas under the precision-recall curve (AUPRC) of 0.899, 0.702, and 0.853 on the internal, WUSM, and EGD test sets, respectively. For 1p/19q, the best model achieved AUROCs of 0.782, 0.754, and 0.842, and AUPRCs of 0.588, 0.713, and 0.782 on those three data splits, respectively. The high accuracy of the model on unseen data showcases its generalization capability and suggests its potential to perform a 'virtual biopsy' for tailoring treatment planning and overall clinical management of gliomas.
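
    The simultaneous localization and classification described above is commonly realized as a shared encoder with a segmentation head and a classification head trained under a weighted joint loss. The PyTorch sketch below shows only that generic pattern; the layer sizes, the loss weight alpha, and the omission of the 2.5D slice stacking and prior-knowledge features are simplifying assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HybridMultiTaskNet(nn.Module):
    """Shared encoder with a voxelwise segmentation head and a classification head."""
    def __init__(self, in_ch=4, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)                 # tumor mask logits
        self.cls_head = nn.Sequential(                      # molecular-status logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

def multitask_loss(seg_logits, seg_target, cls_logits, cls_target, alpha=0.5):
    seg = nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_target)
    cls = nn.functional.cross_entropy(cls_logits, cls_target)
    return alpha * seg + (1 - alpha) * cls

# Toy forward/backward pass on random data (4 MRI sequences per slice).
net = HybridMultiTaskNet()
x = torch.randn(2, 4, 128, 128)
seg_t = torch.randint(0, 2, (2, 1, 128, 128)).float()
cls_t = torch.randint(0, 2, (2,))
seg_logits, cls_logits = net(x)
loss = multitask_loss(seg_logits, seg_t, cls_logits, cls_t)
loss.backward()
```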

    Benchmarking the generalizability of brain age models: Challenges posed by scanner variance and prediction bias

    Machine learning has been increasingly applied to neuroimaging data to predict age, deriving a personalized biomarker with potential clinical applications. The scientific and clinical value of these models depends on their applicability to independently acquired scans from diverse sources. Accordingly, we evaluated the generalizability of two brain age models that were trained across the lifespan by applying them to three distinct early-life samples with participants aged 8-22 years. These models were chosen based on the size and diversity of their training data, but they also differed greatly in their processing methods and predictive algorithms. Specifically, one brain age model was built by applying gradient tree boosting (GTB) to extracted features of cortical thickness, surface area, and brain volume. The other model applied a 2D convolutional neural network (DBN) to minimally preprocessed slices of T1-weighted scans. Additional model variants were created to understand how generalizability changed when each model was trained with data that became more similar to the test samples in terms of age and acquisition protocols. Our results illustrated numerous trade-offs. The GTB predictions were relatively more accurate overall and yielded more reliable predictions when applied to lower-quality scans. In contrast, the DBN displayed the most utility in detecting associations between brain age gaps and cognitive functioning. Broadly speaking, the largest limitations affecting generalizability were acquisition protocol differences and biased brain age estimates. If such confounds could eventually be removed without post-hoc corrections, brain age predictions may have greater utility as personalized biomarkers of healthy aging.
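
    As a rough sketch of the gradient tree boosting (GTB) variant described above, the following fits a boosted regressor on tabular morphometric features and derives the brain age gap (predicted minus chronological age). The random feature matrix and hyperparameters are placeholders, not those of the benchmarked model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical tabular features: cortical thickness, surface area, volumes.
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 300))                 # 500 subjects, 300 morphometric features
age = rng.uniform(8, 80, size=500)                  # chronological age in years

X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
brain_age_gap = pred - y_te                         # positive gap = "older-looking" brain
print("MAE:", np.mean(np.abs(pred - y_te)))
```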

    Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI

    T1- and T2-weighted (T1w and T2w) images are essential for tissue classification and anatomical localization in Magnetic Resonance Imaging (MRI) analyses. However, these anatomical data can be challenging to acquire in non-sedated neonatal cohorts, which are prone to high-amplitude movement and display lower tissue contrast than adults. As a result, one of these modalities may be missing or of such poor quality that it cannot be used for accurate image processing, resulting in subject loss. While recent literature attempts to overcome these issues in adult populations using synthetic imaging approaches, the efficacy of these methods in pediatric populations and their impact on conventional MR analyses have not been evaluated. In this work, we present two novel methods to generate pseudo-T2w images: the first is based on deep learning and expands upon previous models to 3D imaging without requiring paired data; the second is based on nonlinear multi-atlas registration, providing a computationally lightweight alternative. We demonstrate the anatomical accuracy of pseudo-T2w images and their efficacy in existing MR processing pipelines in two independent neonatal cohorts. Critically, we show that implementing these pseudo-T2w methods in resting-state functional MRI analyses produces virtually identical functional connectivity results when compared to those resulting from T2w images, confirming their utility in infant MRI studies for salvaging otherwise lost subject data.
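
    The multi-atlas registration route is described only at a high level; one common pattern (assumed here, not necessarily the authors' exact pipeline) is to warp each atlas T2w volume into the subject's T1w space with an external registration tool and then fuse the warped intensities, e.g. by weighted averaging. The sketch below covers only that final fusion step.

```python
import numpy as np

def fuse_pseudo_t2w(warped_atlas_t2ws, weights=None):
    """Fuse atlas T2w volumes already warped into the subject's T1w space.

    warped_atlas_t2ws : (n_atlases, X, Y, Z) array of co-registered T2w volumes.
    weights           : optional (n_atlases,) similarity-based fusion weights.
    """
    vols = np.asarray(warped_atlas_t2ws, dtype=float)
    if weights is None:
        weights = np.ones(vols.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, vols, axes=1)     # weighted average pseudo-T2w

# Toy example with three hypothetical warped atlases of shape 32^3.
atlases = np.random.rand(3, 32, 32, 32)
pseudo_t2w = fuse_pseudo_t2w(atlases, weights=[0.5, 0.3, 0.2])
```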

    Integrative Imaging Informatics for Cancer Research: Workflow Automation for Neuro-oncology (I3CR-WANO)

    Efforts to utilize growing volumes of clinical imaging data to generate tumor evaluations continue to require significant manual data wrangling owing to data heterogeneity. Here, we propose an artificial intelligence-based solution for the aggregation and processing of multisequence neuro-oncology MRI data to extract quantitative tumor measurements. Our end-to-end framework i) classifies MRI sequences using an ensemble classifier, ii) preprocesses the data in a reproducible manner, iii) delineates tumor tissue subtypes using convolutional neural networks, and iv) extracts diverse radiomic features. Moreover, it is robust to missing sequences and adopts an expert-in-the-loop approach, where the segmentation results may be manually refined by radiologists. Following its implementation in Docker containers, the framework was applied to two retrospective glioma datasets collected from the Washington University School of Medicine (WUSM; n = 384) and the M.D. Anderson Cancer Center (MDA; n = 30), comprising preoperative MRI scans from patients with pathologically confirmed gliomas. The scan-type classifier yielded an accuracy of over 99%, correctly identifying sequences from 380/384 and 30/30 sessions from the WUSM and MDA datasets, respectively. Segmentation performance was quantified using the Dice Similarity Coefficient between the predicted and expert-refined tumor masks. Mean Dice scores were 0.882 (±0.244) and 0.977 (±0.04) for whole tumor segmentation for WUSM and MDA, respectively. This streamlined framework automatically curated, processed, and segmented raw MRI data of patients with varying grades of gliomas, enabling the curation of large-scale neuro-oncology datasets and demonstrating high potential for integration as an assistive tool in clinical practice.
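
    Since segmentation quality is reported as Dice Similarity Coefficients, a minimal reference implementation of that metric is sketched below; the smoothing constant eps is a common convention and a hypothetical choice, not necessarily what the authors used.

```python
import numpy as np

def dice_coefficient(pred_mask, ref_mask, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return (2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps)

# Example: compare a predicted whole-tumor mask with an expert-refined mask.
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[12:42, 12:42] = True
print(round(dice_coefficient(pred, ref), 3))
```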